
ARTICLE 19


A Speculation and Analysis of the Freedom of Speech of Artificial Intelligences

#artificialintelligence

"Milton's voice was not stifled or choked for making Satan a heroic figure; in fact, in "Areopagitica" the blind poet champions free speech: "Give me the liberty to know, to utter and to argue freely according to conscience, above all liberties." Artificial Intelligence may, in the foreseeable future, be an entity that thinks independently enough to enjoy the pleasures of expression the way humanity does. Whether it may have the potential evils of a Satan remains to be seen, but its ontology, and a legal framework to accommodate it, may be in order. Morality aside, it may reasonably be seen as a power that can "make a heaven of hell, a hell of heaven" [2] of the world. It should be pointed out that "strong artificial intelligence" -- artificial intelligence constructs comparable to a human brain, with features such as consciousness that are identified with being human [3] -- is entirely hypothetical and may remain so.


Smile for the camera: dark side of China's emotion-recognition tech

The Guardian

"Ordinary people here in China aren't happy about this technology but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it." So says Chen Wei at Taigusys, a company specialising in emotion-recognition technology, the latest evolution in the broader world of surveillance systems that play a part in nearly every aspect of Chinese society.

Emotion-recognition technologies – in which facial expressions of anger, sadness, happiness and boredom, as well as other biometric data, are tracked – are supposedly able to infer a person's feelings based on traits such as facial muscle movements, vocal tone and body movements. This goes beyond facial-recognition technology, which simply compares faces to determine a match. But, like facial recognition, it involves the mass collection of sensitive personal data to track, monitor and profile people, and uses machine learning to analyse expressions and other clues. The industry is booming in China, where since at least 2012, figures including President Xi Jinping have emphasised the creation of "positive energy" as part of an ideological campaign to encourage certain kinds of expression and limit others. Critics say the technology is based on a pseudo-science of stereotypes, and a growing number of researchers, lawyers and rights activists believe it has serious implications for human rights, privacy and freedom of expression. With the global industry forecast to be worth nearly $36bn by 2023, growing at nearly 30% a year, rights groups say action needs to be taken now.

The main office of Taigusys is tucked behind a few low-rise office buildings in Shenzhen. Visitors are greeted at the doorway by a series of cameras capturing their images on a big screen that displays body temperature, age estimates and other statistics.
Chen, a general manager at the company, says the system in the doorway is the company's bestseller at the moment because of high demand during the coronavirus pandemic. Chen hails emotion recognition as a way to predict dangerous behaviour by prisoners, detect potential criminals at police checkpoints, and identify problem pupils in schools and elderly people experiencing dementia in care homes. Taigusys systems are installed in about 300 prisons, detention centres and remand facilities around China, connecting 60,000 cameras. "Violence and suicide are very common in detention centres," says Chen. "Even if police nowadays don't beat prisoners, they often try to wear them down by not allowing them to fall asleep.


China's growing use of emotion recognition tech raises rights concerns

The Japan Times

Technology that measures emotions based on biometric indicators such as facial movements, tone of voice or body movements is increasingly being marketed in China, researchers say, despite concerns about its accuracy and wider human rights implications. Drawing upon artificial intelligence, the tools range from cameras that help police monitor a suspect's face during an interrogation to eye-tracking devices in schools that identify students who are not paying attention. A report released this week by U.K.-based human rights group Article 19 identified dozens of companies offering such tools in the education, public security and transportation sectors in China. "We believe that their design, development, deployment, sale and transfers should be banned due to the racist foundations and fundamental incompatibility with human rights," said Vidushi Marda, a senior program officer at Article 19. Human emotions cannot be reliably measured and quantified by technology tools, said Shazeda Ahmed, a doctoral candidate studying cybersecurity at the University of California, Berkeley, and the report's co-author. Such systems can perpetuate bias, especially those sold to police that purport to identify criminality based on biometric indicators, she added.


UN HRC42: action to protect privacy and address artificial intelligence among key priorities - ARTICLE 19

#artificialintelligence

On 9 September 2019, the UN Human Rights Council begins its 42nd Session in Geneva (HRC42). Over 3 weeks, major human rights issues will be debated and acted on, with significant implications for the protection of freedom of expression and right to information globally. The UN Human Rights Council, with its 47 Member States, is an essential forum for the protection of freedom of expression, in particular for the rights of journalists, human rights defenders, and minorities and groups facing discrimination. As stakeholders prepare for HRC42, the UN's Human Rights Chief, Michelle Bachelet, has set out a series of thematic priorities for States to act upon, including to reverse shrinking civic space for protesters and dissenters, to push back against heavy censorship of the Internet and attacks on digital rights, and to end killings of human rights defenders, journalists, and trade unionists. As attacks on the multilateral system intensify, with autocrats even resorting to thuggish and personal jibes at Bachelet herself, it is crucial that rights-respecting States demonstrate that the Council can still deliver strong outcomes for freedom of expression.


Artificial Intelligence: ARTICLE 19 calls for protection of freedom…

#artificialintelligence

ARTICLE 19 submitted evidence to the United Kingdom's House of Lords Select Committee on Artificial Intelligence on 6 September 2017. The submission stresses the need to critically evaluate the impact of Artificial Intelligence (AI) and automated decision-making systems on human rights. It also calls for a deeper understanding of the various ways in which these technologies embed values and bias, thereby strengthening or sometimes hindering the exercise of these rights, particularly freedom of expression. The overarching recommendation is for the development and use of AI to be subject to the minimum requirement of respecting, promoting and protecting international human rights standards. Since 2014, ARTICLE 19 has pioneered efforts in technical communities to bridge existing knowledge gaps on human rights and their relevance to internet infrastructure.